Thatcherwood

Reminiscing about pre-corporate Artificial Intelligence

Published: 2024-06-24
Updated: 2024-06-24

Remember the days before AI? You probably do. If you do not, then this post has lived a lot longer than I can imagine. You probably do not realize that AI has been around in some form or another since the 20th century. Remember the Xbox 360 Kinect? How did it know where your body was? It used an (older style of) neural network at the time. That was nearly 15 years ago!

AI has been around a lot longer than you think (depending on how you define it). For example, the Eliza chatbot was created in 1966 and developed a cult following. However, unlike today’s neural-network-based AI, Eliza was built on a set of hand-written rules and patterns.
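To give a sense of what “rules and patterns” means here, below is a tiny, hypothetical sketch in the spirit of Eliza’s keyword rules; the real program used a much larger hand-written script of decomposition and reassembly rules, so take this purely as an illustration of the idea.

```python
import re

# A few illustrative Eliza-style rules: a regex pattern plus a canned
# reflection template. This is NOT Eliza's actual script, just the flavor of it.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    # Fallback when no rule matches, another classic Eliza move.
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I am feeling a bit nostalgic about old AI"))
    # -> "How long have you been feeling a bit nostalgic about old AI?"
```

No model, no training data: just pattern matching and string reassembly, which is why Eliza could run on 1960s hardware.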

Enough about those days though; I want to reminisce about “Modern” AI as I first encountered it. Back in 2019, my first foray into “AI” was the discovery of AIDungeon. Textual AI was the best computers could do in those days (and even then, they weren’t very good). Getting the computer to write you a short story was all the rage back then. I spent a lot of time adventuring on the AIDungeon platform, which ran on OpenAI’s models. The game’s premise was simple: you would select a theme, then choose a short world description written by an author, then finally write in the second person what your character did. The “AI” would make stuff up as you moved along in the procedurally generated world. You could save keywords for long-term memory, and even play with friends. It was like playing Dungeons and Dragons, but with the worst Dungeon Master possible. “AI” back then wasn’t censored and was really, really bad. It would take around five minutes to get a comprehensible response from the “AI”. Regardless of the wait, it was exhilarating to have something write for you.

Looking back, I can see how AIDungeon was a win-win situation. OpenAI got players to train their AI while people got to play around with something that was about to change the world.

In those days, the artwork these models created was undeniably fascinating. Most images were coagulations of disparate objects merging into each other. That vague look of familiarity mixed with alienness is undoubtedly what inspired the Backrooms trend. Yet, despite the obvious potential and undertones, the media didn’t catch on to AI until years later. “AI” was a niche, not a productivity product. In those times, the idea of AI replacing artists was simply a dream. The models were too esoteric, weird, and random to be of actual use. But at the same time, they produced some really unique results.

In some ways, I cringe when I go to ChatGPT and see its output. Corporate slop for the most part. Once “AI” became mainstream, people wanted to use it as a thinking machine. Yet I think it shines far more when writing stories and creating dreamy (if random) artwork.

I’m sure that unique randomness is still available, but not without getting an accountant involved. Just like many other good things, “AI” went from an odd curiosity/hobby to a product. I wish I knew of a good “AI” product today, but I’m more removed from “AI” than I was in 2019. It just is not what it used to be.

The world of “AI” was different back in 2019. In those days, you could grab other people’s trained models and run them directly on Google Colab GPUs completely for free. Back then, it was not uncommon to get programs, scripts, trained models, and resources directly from academics and researchers. There were curated lists of links that led to people’s Colab notebooks. Some of the larger trained models needed to be torrented. “AI” was not really a product yet. Sure, one could buy OpenAI’s Griffin model (or whatever the AIDungeon upgrade was called), but there were plenty of community efforts that were just about as good. It was much more of a community back then.

The trained models made by the community were a lot more specialized, too. I could use one model for writing academic papers, then switch to another for writing fantasy stories. Thanks to this specialization, “AI” did not feel as corporatized and bland as today’s iterations.
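As a rough, hypothetical sketch of what that workflow looked like, here is how you might pull a shared GPT-2 checkpoint with the Hugging Face transformers library and sample from it on a free Colab GPU; the checkpoint name is a placeholder for whichever specialized fine-tune a curated list happened to point you to.

```python
# A minimal sketch of the old workflow: grab a (possibly community fine-tuned)
# GPT-2 checkpoint and sample from it on a free Colab GPU.
# The checkpoint name below is a placeholder, not a specific community model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "gpt2-medium"  # placeholder: e.g. a fantasy-story fine-tune
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name).to(device)

prompt = "You enter the ruined temple and see"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Sample a continuation, AIDungeon-style.
output = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,
    top_p=0.9,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Switching from an academic-paper model to a fantasy-story model was usually nothing more than changing that checkpoint name.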

Now that I’m done painting pre-COVID days with rose-tinted glasses, I want to share a photo I generated on September 16th, 2021.


The text below is part of a (horribly) written blog post I made when first setting up a website on my virtual private server. I just wanted to keep it around as a keepsake.

An AI-generated image of a plague doctor amongst burning sands.

The image above is not some crazy artwork an artist spent many hours working on. It was generated in less than ten minutes from the simple text prompt "Plague Doctor upon a burning beach near the ocean." How did I make such a background so quickly? With tools like VQGAN+CLIP and OpenAI's GPT-2 model. These programs are the backbone of many AI projects today. Unfortunately, most projects are hidden behind different organizations, or are a university student's major project. Very few receive much media attention. I'd like to bring some awareness to you chaps, so I am sharing the following document that lists a huge number of these tools for easy access here. There is also a simple guide here to using AI to generate artwork like the image above. The best part is, all the materials needed to generate these images are accessible through a Google account (so yes, you can do this without your own GPU).
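For anyone curious what was going on under the hood back then, the core trick behind VQGAN+CLIP is to optimize an image until CLIP judges it similar to the text prompt. The sketch below is a heavily simplified, hypothetical version of that loop: it optimizes raw pixels instead of a VQGAN's latent codes, so it illustrates the mechanism rather than reproducing the image above.

```python
# A heavily simplified sketch of the CLIP-guided optimization idea behind
# VQGAN+CLIP: nudge an image so CLIP rates it as similar to a text prompt.
# Assumes: pip install torch git+https://github.com/openai/CLIP
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
# jit=False loads the plain PyTorch model; cast to float32 so backprop
# through the image tensor works the same on CPU and GPU.
model, _ = clip.load("ViT-B/32", device=device, jit=False)
model = model.float().eval()

prompt = "Plague Doctor upon a burning beach near the ocean"
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Start from random noise at the 224x224 resolution CLIP expects.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

# CLIP's published normalization constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(300):
    optimizer.zero_grad()
    normalized = (image.clamp(0, 1) - mean) / std
    image_features = model.encode_image(normalized)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    # Maximize cosine similarity between the image and the prompt.
    loss = -(image_features * text_features).sum()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(f"step {step}: similarity {-loss.item():.3f}")
```

In the actual Colab notebooks, the optimizer steers the latent codes of a pretrained VQGAN decoder rather than raw pixels, which is what gives those old images their painterly, objects-merging-into-each-other texture.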